To correct for respiratory motion in PET imaging, an interpretable and unsupervised deep learning technique, FlowNet-PET, was constructed. A network was trained to predict the optical flow between two PET frames from different breathing amplitude ranges. The trained model aligns different retrospectively-gated PET images, providing a final image with similar counting statistics as a non-gated image, but without the blurring effects. FlowNet-PET was applied to anthropomorphic digital phantom data, which provided the possibility to design robust metrics to quantify the corrections. When comparing the predicted optical flows to the ground truths, the median absolute error was found to be smaller than the pixel and slice widths. The improvements were quantified by comparing against images without motion and computing the intersection over union (IoU) of the tumors, as well as the enclosed activity and the coefficient of variation (CoV) within the no-motion tumor volume, before and after the corrections were applied. The network provided average relative improvements of 64%, 89%, and 75% for the IoU, total activity, and CoV, respectively. FlowNet-PET achieved similar results as a conventional retrospective phase binning approach, but only required one sixth of the scan duration. The code and data are publicly available (https://github.com/teaghan/flownet_pet).
translated by Google Translate
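The correction step described above can be sketched in a few lines. This is a hedged illustration, not the FlowNet-PET implementation: the per-voxel flow field is assumed to be given (e.g. predicted by the network), and each amplitude-gated frame is warped onto a reference gate before the counts are summed.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_frame(frame, flow):
    """Warp a 2D PET frame by a predicted optical-flow field.

    frame: (H, W) activity image for one amplitude gate.
    flow:  (2, H, W) per-pixel displacement (dy, dx) mapping this
           gate onto the reference gate.
    """
    h, w = frame.shape
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    # Backward warping: sample the frame at the displaced coordinates.
    coords = np.stack([yy + flow[0], xx + flow[1]])
    return map_coordinates(frame, coords, order=1, mode="nearest")

def motion_corrected_sum(frames, flows):
    """Align every gated frame to the reference gate and sum the counts,
    recovering non-gated count statistics without motion blur."""
    return sum(warp_frame(f, fl) for f, fl in zip(frames, flows))
```

In the paper the flow is predicted per pair of gates by the trained network; here a constant shift would already demonstrate the mechanics.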
Magnetic Resonance Imaging (MRI) can produce detailed images of human anatomy and physiology that can assist doctors in diagnosing and treating pathologies such as tumours. However, MRI suffers from very long acquisition times, which makes it susceptible to patient motion artifacts and limits its potential to deliver dynamic treatments. Conventional approaches such as Parallel Imaging and Compressed Sensing allow for an increase in acquisition speed by reconstructing MR images from sub-sampled MRI data acquired using multiple receiver coils. Recent advances in Deep Learning, combined with Parallel Imaging and Compressed Sensing techniques, have the potential to produce high-fidelity reconstructions from highly accelerated MRI data. In this work we present a novel Deep Learning-based inverse problem solver for the task of accelerated MRI reconstruction, called the Recurrent Variational Network (RecurrentVarNet), which exploits the properties of convolutional recurrent networks and unrolled algorithms for solving inverse problems. The RecurrentVarNet consists of multiple blocks, each responsible for one unrolled iteration of a gradient descent optimization algorithm for solving the inverse problem. In contrast to traditional approaches, the optimization steps are performed in the observation domain ($k$-space) instead of the image domain. Each block of the RecurrentVarNet refines the observed $k$-space and comprises a data consistency term and a recurrent unit that takes as input a learned hidden state and the prediction of the previous block. Our proposed method achieves new state-of-the-art qualitative and quantitative reconstruction results on 5-fold and 10-fold accelerated data from a public multi-channel brain dataset, outperforming previous conventional and deep learning-based approaches. We will release all model code and baselines on a public repository.
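The core idea of an unrolled optimizer operating in $k$-space can be sketched as follows. This is only a toy illustration, not the RecurrentVarNet: the learned recurrent refinement unit is replaced by a fixed image-domain smoother, and each iteration alternates refinement with a gradient step on the data-consistency term $\|M(k - y)\|^2$.

```python
import numpy as np

def refine(k):
    # Stand-in for the learned recurrent unit: mild image-domain
    # smoothing, transformed back to k-space.
    img = np.fft.ifft2(k)
    img = 0.25 * (np.roll(img, 1, 0) + np.roll(img, -1, 0)
                  + np.roll(img, 1, 1) + np.roll(img, -1, 1))
    return np.fft.fft2(img)

def unrolled_kspace_recon(y, mask, n_iter=8, eta=1.0):
    """Toy unrolled reconstruction performed in k-space.

    y:    observed (masked) k-space, shape (H, W), complex.
    mask: binary sampling mask, shape (H, W).
    With eta=1.0 the data-consistency step resets the measured
    entries exactly to the observations.
    """
    k = y.copy()
    for _ in range(n_iter):
        k = refine(k)                 # refinement (learned in the paper)
        k = k - eta * mask * (k - y)  # data-consistency gradient step
    return np.abs(np.fft.ifft2(k))    # image-domain magnitude
```

In the real model the refinement is a convolutional recurrent unit conditioned on a hidden state passed between blocks; only the alternation of refinement and $k$-space data consistency is reproduced here.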
Despite its wide adoption in nearly every medical diagnostic and examination application, Magnetic Resonance Imaging (MRI) remains a slow imaging modality, which limits its use for dynamic imaging. In recent years, Parallel Imaging (PI) and Compressed Sensing (CS) have been utilized to accelerate MRI acquisition. Subsampling the k-space measurements during scan time using Cartesian trajectories, such as rectilinear sampling, is currently the most conventional CS approach applied in clinical settings; it is, however, prone to producing aliased reconstructions. With the advent of Deep Learning (DL) in accelerated MRI, reconstructing faithful images from subsampled data has become increasingly promising. Retrospectively applying a subsampling mask onto the k-space data is one way of simulating the accelerated acquisition of k-space data as it would occur in a real clinical setting. In this paper we compare and review the effect that applying either rectilinear or radial retrospective subsampling has on the quality of the reconstructions output by trained deep neural networks. With the same choice of hyperparameters, we train and evaluate two distinct Recurrent Inference Machines (RIMs), one for each type of subsampling. The qualitative and quantitative results of our experiments indicate that the model trained on data with radial subsampling attains higher performance and learns to estimate reconstructions with higher fidelity, paving the way for other DL approaches to involve radial subsampling.
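The two retrospective subsampling schemes being compared can be sketched directly as binary masks on a Cartesian grid. This is an illustrative construction under assumed conventions (acceleration factor, fully sampled centre band, pseudo-radial spokes snapped to the grid), not the exact masks used in the paper.

```python
import numpy as np

def rectilinear_mask(shape, acceleration=4, center_lines=8):
    """Cartesian (rectilinear) mask: keep every R-th phase-encoding
    line plus a fully sampled low-frequency band in the centre."""
    h, w = shape
    mask = np.zeros(shape, dtype=bool)
    mask[::acceleration, :] = True
    c = h // 2
    mask[c - center_lines // 2 : c + center_lines // 2, :] = True
    return mask

def radial_mask(shape, n_spokes=32):
    """Pseudo-radial mask on a Cartesian grid: n_spokes straight
    lines through the k-space centre at evenly spaced angles."""
    h, w = shape
    mask = np.zeros(shape, dtype=bool)
    cy, cx = h // 2, w // 2
    r = np.arange(-max(h, w), max(h, w))
    for angle in np.linspace(0, np.pi, n_spokes, endpoint=False):
        y = (cy + r * np.sin(angle)).round().astype(int)
        x = (cx + r * np.cos(angle)).round().astype(int)
        keep = (y >= 0) & (y < h) & (x >= 0) & (x < w)
        mask[y[keep], x[keep]] = True
    return mask
```

Radial trajectories oversample the k-space centre, where most image energy lies, which is one intuition for the higher-fidelity reconstructions reported above.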
Deep learning-based brain magnetic resonance imaging (MRI) reconstruction methods have the potential to accelerate the MRI acquisition process. Nevertheless, the scientific community lacks appropriate benchmarks to assess MRI reconstruction quality for high-resolution brain images and to evaluate how these proposed algorithms behave in the presence of small, but expected, data distribution shifts. The Multi-Coil Magnetic Resonance Image (MC-MRI) Reconstruction Challenge provides such a benchmark, using a large dataset of high-resolution, three-dimensional, T1-weighted MRI scans. The challenge has two primary goals: 1) to compare different MRI reconstruction models on this dataset, and 2) to assess the generalizability of these models to data acquired with a different number of receiver coils. In this paper, we describe the challenge experimental design and summarize the results of a set of baseline and state-of-the-art brain MRI reconstruction models. We provide relevant comparative information on the current state of the art in MRI reconstruction and highlight the challenges of obtaining the generalizable models that are required prior to broader clinical adoption. The MC-MRI benchmark data, evaluation code, and current challenge leaderboard are publicly available. They provide an objective performance assessment for future developments in the field of brain MRI reconstruction.
The analysis of network structure is essential to many scientific areas, ranging from biology to sociology. As the computational task of clustering these networks into partitions, i.e., solving the community detection problem, is generally NP-hard, heuristic solutions are indispensable. The exploration of expedient heuristics has led to the development of particularly promising approaches in the emerging technology of quantum computing. Motivated by the substantial hardware demands for all established quantum community detection approaches, we introduce a novel QUBO based approach that only needs number-of-nodes many qubits and is represented by a QUBO-matrix as sparse as the input graph's adjacency matrix. The substantial improvement on the sparsity of the QUBO-matrix, which is typically very dense in related work, is achieved through the novel concept of separation-nodes. Instead of assigning every node to a community directly, this approach relies on the identification of a separation-node set, which -- upon its removal from the graph -- yields a set of connected components, representing the core components of the communities. Employing a greedy heuristic to assign the nodes from the separation-node sets to the identified community cores, subsequent experimental results yield a proof of concept. This work hence displays a promising approach to NISQ ready quantum community detection, catalyzing the application of quantum computers for the network structure analysis of large scale, real world problem instances.
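The separation-node pipeline described above can be sketched classically. The QUBO step that actually selects the separation-node set is the paper's contribution and is omitted here; this sketch assumes the separation-node set is already given, and illustrates only the downstream steps: removing the separators, taking connected components as community cores, and greedily attaching each separator to the core it shares the most edges with.

```python
def connected_components(nodes, adj):
    """Plain DFS connected components over the given node subset."""
    seen, comps = set(), []
    for s in nodes:
        if s in seen:
            continue
        comp, stack = set(), [s]
        seen.add(s)
        while stack:
            u = stack.pop()
            comp.add(u)
            for v in adj[u]:
                if v in nodes and v not in seen:
                    seen.add(v)
                    stack.append(v)
        comps.append(comp)
    return comps

def communities_from_separators(adj, separators):
    """Remove the separation-node set, take the remaining connected
    components as community cores, then greedily attach each
    separation node to the core it shares the most edges with."""
    core_nodes = set(adj) - set(separators)
    comps = connected_components(core_nodes, adj)
    for s in separators:
        best = max(comps, key=lambda c: len(c & set(adj[s])), default=None)
        if best is not None:
            best.add(s)
    return comps
```

With number-of-nodes-many binary variables indicating separator membership, the QUBO matrix only needs entries where the adjacency matrix has them, which is the source of the sparsity claim.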
Efficient surrogate modelling is a key requirement for uncertainty quantification in data-driven scenarios. In this work, a novel approach of using Sparse Random Features for surrogate modelling in combination with self-supervised dimensionality reduction is described. The method is compared to other methods on synthetic and real data obtained from crashworthiness analyses. The results show the superiority of the approach described here over state-of-the-art surrogate modelling techniques, namely Polynomial Chaos Expansions and Neural Networks.
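A random-features surrogate can be sketched in a few lines of numpy. This is a generic illustration, not the paper's method: the sparsification scheme, feature map, and hyperparameters here are assumptions, and the self-supervised dimensionality reduction step is omitted.

```python
import numpy as np

def fit_random_feature_surrogate(X, y, n_features=300, sparsity=0.5,
                                 ridge=1e-6, seed=0):
    """Toy sparse random-features surrogate: project inputs through
    sparse random weights, apply a cosine nonlinearity, and solve a
    ridge regression for the outer weights."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_features))
    W *= rng.random(W.shape) > sparsity          # zero out most projection weights
    b = rng.uniform(0, 2 * np.pi, n_features)
    Phi = np.cos(X @ W + b)                      # random feature matrix
    alpha = np.linalg.solve(Phi.T @ Phi + ridge * np.eye(n_features),
                            Phi.T @ y)
    return lambda Xq: np.cos(Xq @ W + b) @ alpha
```

Because only the outer weights are fit (a single linear solve), such surrogates are cheap to train compared to neural networks, which matters when each training sample is an expensive crash simulation.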
In the era of noisy intermediate scale quantum devices, variational quantum circuits (VQCs) are currently one of the main strategies for building quantum machine learning models. These models are made up of a quantum part and a classical part. The quantum part is given by a parametrization $U$, which, in general, is obtained from the product of different quantum gates. In turn, the classical part corresponds to an optimizer that updates the parameters of $U$ in order to minimize a cost function $C$. However, despite the many applications of VQCs, there are still questions to be answered, such as: What is the best sequence of gates to be used? How should their parameters be optimized? Which cost function should be used? How does the architecture of the quantum chips influence the final results? In this article, we focus on answering the last question. We show that, in general, the cost function will tend to a typical average value the closer the parametrization used is to a $2$-design. Therefore, the closer this parametrization is to a $2$-design, the less the result of the quantum neural network model will depend on its parametrization. As a consequence, we can use the architecture of the quantum chip itself to define the VQC parametrization, avoiding the use of additional swap gates and thus diminishing the VQC depth and the associated errors.
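The concentration effect described above can be probed numerically with a tiny statevector simulation. This is a hedged sketch under assumed conventions (a hardware-efficient RY + CNOT-chain ansatz and the cost $C = \langle Z_0\rangle$, neither of which is specified by the abstract): as depth grows and the ansatz better scrambles the state, the variance of the cost over random parameters shrinks toward a typical value.

```python
import numpy as np

def apply_ry(state, theta, qubit, n):
    """Apply RY(theta) to one qubit of an n-qubit statevector."""
    psi = np.moveaxis(state.reshape([2] * n).copy(), qubit, 0)
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    a = psi[0].copy()
    psi[0] = c * a - s * psi[1]
    psi[1] = s * a + c * psi[1]
    return np.moveaxis(psi, 0, qubit).reshape(-1)

def apply_cnot(state, control, target, n):
    """Apply CNOT(control -> target) to an n-qubit statevector."""
    psi = np.moveaxis(state.reshape([2] * n).copy(), [control, target], [0, 1])
    psi[1, 0], psi[1, 1] = psi[1, 1].copy(), psi[1, 0].copy()
    return np.moveaxis(psi, [0, 1], [control, target]).reshape(-1)

def z0_expectation(state):
    """Expectation of Z on qubit 0 (the leading tensor axis)."""
    probs = np.abs(state.reshape(2, -1)) ** 2
    return probs[0].sum() - probs[1].sum()

def hardware_efficient_cost(params, n=3):
    """Cost <Z_0> of a layered RY + CNOT-chain ansatz;
    params has shape (depth, n)."""
    state = np.zeros(2 ** n)
    state[0] = 1.0
    for layer in params:
        for q in range(n):
            state = apply_ry(state, layer[q], q, n)
        for q in range(n - 1):
            state = apply_cnot(state, q, q + 1, n)
    return z0_expectation(state)
```

Sampling the cost at random angles for a depth-1 versus a depth-6 circuit shows the variance dropping with depth, which is the parametrization-independence the abstract argues for.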
Recent trends in language modeling have focused on increasing performance through scaling, and have resulted in an environment where training language models is out of reach for most researchers and practitioners. While most in the community are asking how to push the limits of extreme computation, we ask the opposite question: How far can we get with a single GPU in just one day? We investigate the downstream performance achievable with a transformer-based language model trained completely from scratch with masked language modeling for a single day on a single consumer GPU. Aside from re-analyzing nearly all components of the pretraining pipeline for this scenario and providing a modified pipeline with performance close to BERT, we investigate why scaling down is hard, and which modifications actually improve performance in this scenario. We provide evidence that even in this constrained setting, performance closely follows scaling laws observed in large-compute settings. Through the lens of scaling laws, we categorize a range of recent improvements to training and architecture and discuss their merit and practical applicability (or lack thereof) for the limited compute setting.
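The masked-language-modeling objective at the heart of the single-GPU pretraining setup above can be sketched as follows. The token ids and the 80/10/10 corruption split follow common BERT-style conventions and are assumptions, not details taken from the abstract.

```python
import random

MASK_ID, VOCAB_SIZE = 103, 30522   # BERT-style conventions (assumed)

def mask_tokens(token_ids, mask_prob=0.15, seed=None):
    """BERT-style MLM corruption: select ~15% of positions; replace
    80% of those with [MASK], 10% with a random token, and keep 10%
    unchanged. Returns (corrupted, labels), where labels is -100
    (ignored by the loss) at unselected positions."""
    rng = random.Random(seed)
    corrupted, labels = list(token_ids), [-100] * len(token_ids)
    for i, tok in enumerate(token_ids):
        if rng.random() >= mask_prob:
            continue
        labels[i] = tok                     # predict the original token here
        roll = rng.random()
        if roll < 0.8:
            corrupted[i] = MASK_ID          # 80%: [MASK]
        elif roll < 0.9:
            corrupted[i] = rng.randrange(VOCAB_SIZE)  # 10%: random token
        # remaining 10%: keep the original token
    return corrupted, labels
```

In the constrained setting studied above, choices like this corruption rate are exactly the kind of pipeline component whose effect on downstream performance is re-analyzed.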
This short paper discusses continually updated causal abstractions as a potential direction of future research. The key idea is to revise the existing level of causal abstraction to a different level of detail that is both consistent with the history of observed data and more effective in solving a given task.
State-of-the-art poetry generation systems are often complex. They either consist of task-specific model pipelines, incorporate prior knowledge in the form of manually created constraints or both. In contrast, end-to-end models would not suffer from the overhead of having to model prior knowledge and could learn the nuances of poetry from data alone, reducing the degree of human supervision required. In this work, we investigate end-to-end poetry generation conditioned on styles such as rhyme, meter, and alliteration. We identify and address lack of training data and mismatching tokenization algorithms as possible limitations of past attempts. In particular, we successfully pre-train and release ByGPT5, a new token-free decoder-only language model, and fine-tune it on a large custom corpus of English and German quatrains annotated with our styles. We show that ByGPT5 outperforms other models such as mT5, ByT5, GPT-2 and ChatGPT, while also being more parameter efficient and performing favorably compared to humans. In addition, we analyze its runtime performance and introspect the model's understanding of style conditions. We make our code, models, and datasets publicly available.
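The "token-free" property that motivates ByGPT5 can be illustrated in a few lines: a byte-level model sees the raw UTF-8 bytes of a poem, so character-level phenomena like rhyming suffixes are directly visible in its input, whereas subword tokenizers may split the same suffix differently across words. The rhyme check below is a deliberately crude illustration, not the paper's style-annotation method.

```python
def byte_tokenize(text):
    """Token-free encoding as in ByT5/ByGPT5-style models: the input
    is just its UTF-8 byte sequence (special-id offsets omitted)."""
    return list(text.encode("utf-8"))

def rhyme_suffix_match(line_a, line_b, n=3):
    """Crude rhyme heuristic at the byte level: compare the last n
    bytes of each line, which byte-level models can attend to
    directly."""
    return byte_tokenize(line_a.lower())[-n:] == byte_tokenize(line_b.lower())[-n:]
```

A subword tokenizer might encode "night" and "light" with tokens that share no common suffix unit; at the byte level the shared ending is explicit, which is one reason mismatched tokenization is cited above as a limitation of past attempts.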